🤔 That’s questionable

Designing and deploying effective models for generating multiple versions of auto-marked questions

2024-12-02

Slides: scan the QR code or go to link.lizabolton.com



Presented at NZSA 2024 @ Te Herenga Waka Victoria University of Wellington

Funding

This project has received funding from the Faculty of Science Scholarship of Teaching and Learning Fund.

Ethics

This study was approved by the University of Auckland Human Participants Ethics Committee (ref: UAHPEC27494).

Course context

  • STATS 101/108 is an introductory statistics course
    • an introduction to using data to learn, identify and solve problems, make decisions, and communicate
  • 1600 to ~2000 students in Semesters 1 and 2 + a summer school offering
  • Required for programmes in business and psychology
  • 👩🏻‍🏫 Teaching teams of 4–6 lecturers, 8–10 help room tutors and 15–20 markers
  • Major redesign in 2023

Redesign components

Aims

This talk has three aims:

  1. to explore design principles that support pedagogy-first approaches to creating question-generating models,
  2. to share considerations and opportunities with respect to having students analyse data (with iNZight Lite) to answer quiz questions, and
  3. to report on how students are actually using quizzes with multiple versions in a large introductory statistics course, including findings based on data about quiz attempts, as well as reflections from the teaching team.

Motivation

Context

Introduction to Statistics

  • describe the STATS 101/108 context
  • briefly mention Anna’s work in STATS 220, and future development for BIOSCI (Charlotte)

Demonstration

To do?

  • Create a little toy demo in free Canvas
  • Synthetic data based on the actual data we have about how students use the quiz would be a fun meta here
  • Give the link to iNZight also
  • This wouldn’t have versions

This part will address aim 2 (considerations and opportunities with iNZight) and has the benefit of introducing some of the ideas for aim 3.

Design principles & pedagogy

Question types

Auto-marked questions tend to come in the following types:

  • Identify if a statement is TRUE or FALSE
  • Complete a statement with the right drop down
  • Write a number
  • Write a word

How a student might get that answer:

  • Recall (a year, a rule of thumb)
  • Identify the number from text
  • Perform a calculation
  • Interact with data

::: notes
T or F might come with a bunch of distractors, and that is where historically a lot of question-writing work has gone.
:::
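To make this concrete, here is a minimal sketch of a question-generating model for a “perform a calculation, write a number” item. It is illustrative only, not the course’s actual generator; the function name, the question wording, and the tolerance value are all invented for this example.

```r
# Illustrative sketch only (not the course's actual generator): one
# "write a number" question generated in many randomised versions, with the
# correct answer computed alongside so it can be auto-marked.
make_mean_question <- function(version_seed) {
  set.seed(version_seed)                            # reproducible per version
  values <- round(runif(8, min = 10, max = 50), 1)  # values shown to the student
  list(
    prompt = paste0(
      "The following values were recorded: ",
      paste(values, collapse = ", "),
      ". What is the mean, rounded to 1 decimal place?"
    ),
    answer    = round(mean(values), 1),             # answer key for auto-marking
    tolerance = 0.05                                # allow small rounding slack
  )
}

# Twenty versions of the same underlying question
versions <- lapply(1:20, make_mean_question)
versions[[1]]$prompt
versions[[1]]$answer
```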

Question-generating models

So…how does it work in practice?
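For the “interact with data” style of question, each version also needs its own data set for students to explore. Below is a hedged sketch of what that workflow could look like, not the production pipeline: the palmerpenguins data stands in for real course data, and the file names and question are invented. Each version gets its own CSV to open in iNZight Lite, and the answer key is computed from that same file.

```r
# Assumed workflow sketch (not the production pipeline): each quiz version gets
# its own data file for students to open in iNZight Lite, and the answer key is
# computed from that same file.
library(dplyr)
library(palmerpenguins)  # stand-in data set for this example

generate_version <- function(full_data, version_id, n = 200) {
  set.seed(version_id)
  sample_data <- slice_sample(full_data, n = n)      # per-version subsample
  file <- sprintf("quiz_data_v%02d.csv", version_id)
  write.csv(sample_data, file, row.names = FALSE)    # file linked in the quiz

  data.frame(
    version = version_id,
    file    = file,
    # e.g. "What proportion of penguins in this sample are Adelie?"
    answer  = round(mean(sample_data$species == "Adelie"), 2)
  )
}

answer_key <- do.call(rbind, lapply(1:10, generate_version, full_data = penguins))
head(answer_key)
```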

To do

  • Do some analyses on the attempt data (check if the profiles team has anything fun to add, and credit them)
  • Call back to how some of this is built out into the quiz

Reflections

Students use these quizzes for revision quite effectively.

Some students will brute-force it to try to get their 10/10 without fully understanding what they’re supposed to be doing.

It makes it easier on us when writing portions of the tests and exams, as we can draw on styles of question students are very familiar with but put them in a new data context.

Conclusions

FAQ

Does this ruin your class average because they all get 10s? Nope.

Do you do this for the test and exam? Nope. The current tech-support drama of one testing platform is more than enough for high-stakes assessments, so we don’t have students use iNZight in the test or exam.

Next steps

  • 📦 Working towards a package that would make it easier to write these question-generating models in R and then set up the components appropriately for Canvas, Inspera, HTML, etc.

  • 👥 Understanding student interaction profiles

The bigger project

Liza to go look at the ethics

Looking at design and looking at what’s coming out of the use of tools, how do we

Slides: link.lizabolton.com